On approximations via convolution-defined mixture models
Authors
Abstract
Similar resources
Exponential Models: Approximations for Probabilities
Welch & Peers (1963) used a root-information prior to obtain posterior probabilities for a scalar-parameter exponential model and showed that these Bayes probabilities have the confidence property to second order asymptotically. An important implication is that the constant-information reparameterization provides location-model structure, for which the confidence property ...
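To pin down the objects in this summary (a standard statement of the Welch–Peers setup, not text quoted from the paper): for a scalar parameter \(\theta\) with Fisher information \(i(\theta)\), the root-information prior is the Jeffreys prior, and integrating the root information gives the reparameterization with constant information, under which the model has location structure:

```latex
% Root-information (Jeffreys) prior for a scalar parameter \theta:
\pi(\theta) \propto i(\theta)^{1/2}

% Constant-information reparameterization: \beta has Fisher information
% identically equal to 1, giving the model location structure:
\beta(\theta) = \int^{\theta} i(t)^{1/2}\, dt
```

"Second order" here means one-sided posterior quantiles under this prior match frequentist confidence limits with coverage error \(O(n^{-1})\) rather than the generic \(O(n^{-1/2})\).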
Robust Rigid Point Registration based on Convolution of Adaptive Gaussian Mixture Models
Matching 3D rigid point clouds in complex environments robustly and accurately is still a core technique used in many applications. This paper proposes a new architecture combining error estimation from sample covariances and dual global probability alignment based on the convolution of adaptive Gaussian Mixture Models (GMM) from point clouds. Firstly, a novel adaptive GMM is defined using prob...
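As a rough illustration of the likelihood-based alignment idea (a minimal 2-D sketch using scikit-learn's GaussianMixture and scipy, with synthetic data; this is not the adaptive-GMM / convolution architecture the paper proposes):

```python
# Sketch: rigid point-set alignment by maximizing a GMM likelihood.
# Fit a GMM to the target cloud, then optimize a rigid transform of
# the source cloud against that fixed density.
import numpy as np
from scipy.optimize import minimize
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Anisotropic two-cluster target so the rotation is identifiable:
a = rng.normal([0.0, 0.0], [1.0, 0.3], size=(100, 2))
b = rng.normal([4.0, 1.0], [0.5, 1.0], size=(100, 2))
target = np.vstack([a, b])

theta_true, t_true = 0.4, np.array([1.0, -0.5])
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
source = (target - t_true) @ R                  # misaligned copy of target

gmm = GaussianMixture(n_components=5, random_state=0).fit(target)

def neg_loglik(params):
    th, tx, ty = params
    Rm = np.array([[np.cos(th), -np.sin(th)],
                   [np.sin(th),  np.cos(th)]])
    moved = source @ Rm.T + np.array([tx, ty])  # apply candidate transform
    return -gmm.score(moved)                    # minus mean log-likelihood

res = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
print(res.x)  # roughly [0.4, 1.0, -0.5] when the right basin is found
```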
Robust Cluster Analysis via Mixture Models
Finite mixture models are increasingly being used to model the distributions of a wide variety of random phenomena and to cluster data sets. In this paper, we focus on the use of normal mixture models to cluster data sets of continuous multivariate data. As normality-based methods of estimation are not robust, we review the use of t component distributions. With the t mixture model-based approa...
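The robustness mechanism alluded to here can be written down in the standard t-mixture form (from the general mixture-modeling literature, not quoted from this abstract): each component is a multivariate t density, and the EM fit downweights outlying points through the squared Mahalanobis distance:

```latex
% G-component mixture of p-variate t distributions:
f(\mathbf{y};\Psi) = \sum_{g=1}^{G} \pi_g\, t_p(\mathbf{y};\,
    \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g, \nu_g)

% E-step weight for observation y in component g, where \delta is the
% squared Mahalanobis distance to \mu_g under \Sigma_g:
u_g(\mathbf{y}) = \frac{\nu_g + p}
    {\nu_g + \delta(\mathbf{y};\, \boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g)}
```

Outliers have large \(\delta\) and hence small weight \(u_g\), so they pull the fitted means and covariances far less than under a normal mixture; as \(\nu_g \to \infty\) the weights tend to 1 and the normal mixture is recovered.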
Convolution spline approximations of Volterra integral equations
We derive a new "convolution spline" approximation method for convolution Volterra integral equations. This shares some properties of convolution quadrature but, instead of being based on an underlying ODE solver, is explicitly constructed in terms of basis functions which have compact support. At time step t_n = nh > 0, the solution is approximated in a "backward time" manner in terms of basis f...
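For orientation, here is a generic fixed-step discretization of a linear convolution Volterra equation of the second kind; it shows the t_n = nh time grid mentioned above but uses a plain trapezoidal rule, not the compactly supported convolution-spline basis the paper constructs:

```python
# Trapezoidal time-stepping for u(t) = f(t) + \int_0^t k(t - s) u(s) ds,
# a standard scheme shown only to fix notation (grid t_n = n h).
import numpy as np

def volterra_trapezoid(f, k, T=1.0, N=100):
    h = T / N
    t = np.linspace(0.0, T, N + 1)           # grid points t_n = n h
    fk = k(t)                                # kernel sampled on the grid
    u = np.empty(N + 1)
    u[0] = f(t[0])
    for n in range(1, N + 1):
        # trapezoidal weights: 1/2 at both endpoints of [0, t_n]
        s = 0.5 * fk[n] * u[0] + np.dot(fk[n - 1:0:-1], u[1:n])
        u[n] = (f(t[n]) + h * s) / (1.0 - 0.5 * h * fk[0])
    return t, u

# With k = 1 and f = 1 the exact solution is u(t) = exp(t):
t, u = volterra_trapezoid(lambda s: np.ones_like(s),
                          lambda s: np.ones_like(s))
print(np.max(np.abs(u - np.exp(t))))         # small O(h^2) error
```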
Estimating Mixture Models via Mixtures of Polynomials
Mixture modeling is a general technique for making any simple model more expressive through weighted combination. This generality and simplicity in part explain the success of the Expectation Maximization (EM) algorithm, in which updates are easy to derive for a wide class of mixture models. However, the likelihood of a mixture model is non-convex, so EM has no known global convergence guarant...
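A minimal EM loop for a two-component Gaussian mixture shows the kind of easy closed-form updates this snippet mentions; rerunning it from different starting points can land in different local optima, which is the non-convexity issue the paper targets. This is an illustrative sketch on synthetic data, not the paper's mixture-of-polynomials estimator:

```python
# Minimal EM for a two-component 1-D Gaussian mixture.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

def em_gmm(x, pi, mu, sigma, iters=100):
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        p1 = pi * norm.pdf(x, mu[0], sigma[0])
        p2 = (1 - pi) * norm.pdf(x, mu[1], sigma[1])
        r = p1 / (p1 + p2)
        # M-step: closed-form weighted maximum-likelihood updates
        pi = r.mean()
        mu = [np.average(x, weights=r), np.average(x, weights=1 - r)]
        sigma = [np.sqrt(np.average((x - mu[0]) ** 2, weights=r)),
                 np.sqrt(np.average((x - mu[1]) ** 2, weights=1 - r))]
    return pi, mu, sigma

print(em_gmm(x, 0.5, [-1.0, 1.0], [1.0, 1.0]))  # near pi=0.3, mu=(-2, 3)
```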
Journal
Journal title: Communications in Statistics - Theory and Methods
Year: 2018
ISSN: 0361-0926, 1532-415X
DOI: 10.1080/03610926.2018.1487069